Multitasking Capability Versus Learning Efficiency in Neural Network Architectures

Authors

  • Sebastian Musslick
  • Andrew Saxe
  • Kayhan Özcimder
  • Biswadip Dey
  • Greg Henselman
  • Jonathan D. Cohen
Abstract

One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Previous work has identified overlap between task processing pathways as a limiting factor for multitasking performance in neural architectures. This raises an important question: insofar as shared representation between tasks introduces the risk of cross-talk and thereby limitations in multitasking, why would the brain prefer shared task representations over separate representations across tasks? We seek to answer this question by introducing formal considerations and neural network simulations in which we contrast the multitasking limitations that shared task representations incur with their benefits for task learning. Our results suggest that neural network architectures face a fundamental tradeoff between learning efficiency and multitasking performance in environments with shared structure between tasks.
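The tradeoff described in the abstract can be illustrated with a toy linear network (our sketch, not the authors' simulation; all weights and sizes below are assumptions). Two tasks each map one input feature to one output; "separate" pathways give each task its own hidden unit, while a "shared" pathway routes both tasks through a single hidden unit, so simultaneous task demands mix at the output (cross-talk):

```python
import numpy as np

def forward(W_in, W_out, x):
    """One pass through a linear hidden layer."""
    return W_out @ (W_in @ x)

# Separate pathways: block-diagonal weights, no overlap between tasks.
W_in_sep = np.eye(2)
W_out_sep = np.eye(2)

# Shared pathway: both task inputs converge on one hidden unit that
# projects to both task outputs.
W_in_shared = np.array([[1.0, 1.0]])    # 1 hidden unit <- 2 task inputs
W_out_shared = np.array([[1.0],
                         [1.0]])        # 2 task outputs <- 1 hidden unit

x_both = np.array([1.0, 1.0])           # both tasks demanded at once

out_sep = forward(W_in_sep, W_out_sep, x_both)           # [1., 1.]: each task correct
out_shared = forward(W_in_shared, W_out_shared, x_both)  # [2., 2.]: signals mix (cross-talk)
```

The shared pathway sums both task signals onto each output, which is the multitasking cost the paper weighs against the learning benefit of reusing representations across tasks with shared structure.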


Related articles

Multi-Step-Ahead Prediction of Stock Price Using a New Architecture of Neural Networks

Modelling and forecasting the stock market is a challenging task for economists and engineers, since it has a dynamic structure and nonlinear characteristics. This nonlinearity affects the efficiency of the price characteristics. Using an Artificial Neural Network (ANN) is a proper way to model this nonlinearity, and it has been used successfully in one-step-ahead and multi-step-ahead prediction of di...
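Multi-step-ahead prediction of the kind this abstract mentions is often done recursively: a one-step model is applied repeatedly, feeding each prediction back into its own input window. A minimal sketch (our illustration, not the paper's architecture; `moving_average` is a stand-in for a trained ANN):

```python
def moving_average(window):
    """Toy one-step predictor: mean of the recent window."""
    return sum(window) / len(window)

def multi_step_ahead(predict, history, n_steps, window_size=3):
    """Predict n_steps ahead by recycling predictions as inputs."""
    series = list(history)
    preds = []
    for _ in range(n_steps):
        y = predict(series[-window_size:])
        preds.append(y)
        series.append(y)   # feed the prediction back in
    return preds

preds = multi_step_ahead(moving_average, [1.0, 2.0, 3.0], n_steps=2)
# preds[0] = mean(1, 2, 3) = 2.0; preds[1] = mean(2, 3, 2.0) = 7/3
```

The recursive scheme compounds prediction error over the horizon, which is why multi-step-ahead forecasting is harder than one-step-ahead.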


Reinforcement Learning for Architecture Search by Network Transformation

Deep neural networks have shown effectiveness in many challenging tasks and proved their strong capability in automatically learning good feature representation from raw input. Nonetheless, designing their architectures still requires much human effort. Techniques for automatically designing neural network architectures such as reinforcement learning based approaches recently show promising res...


Smarter initializations in multi-modal neural networks to predict transcription factor binding

Various deep neural methods have been developed to predict transcription factor (TF) binding on DNA regions. In this paper we employ neural-network initialization techniques inspired by domain knowledge to augment current work in TF binding prediction. We train a Convolutional Neural Network (CNN) with the first layer of filters initialized in varying proportions to the values of position weigh...
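Domain-informed initialization of the kind this abstract describes can be sketched as seeding a fraction of first-layer filters with position weight matrix (PWM) values rather than random values (a hypothetical illustration; the PWM, sizes, and proportion below are all assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)

n_filters, motif_len, n_bases = 8, 6, 4   # 4 DNA bases: A, C, G, T
filters = rng.normal(0.0, 0.1, size=(n_filters, motif_len, n_bases))

# Illustrative PWM for one motif (each row is a probability over bases).
pwm = np.array([[0.70, 0.10, 0.10, 0.10],
                [0.10, 0.70, 0.10, 0.10],
                [0.10, 0.10, 0.70, 0.10],
                [0.25, 0.25, 0.25, 0.25],
                [0.10, 0.10, 0.10, 0.70],
                [0.70, 0.10, 0.10, 0.10]])

init_fraction = 0.25                      # proportion seeded from the PWM
n_seeded = int(n_filters * init_fraction)
filters[:n_seeded] = pwm                  # remaining filters stay random
```

Varying `init_fraction` is one way to study the proportion of domain-initialized versus randomly initialized filters that the abstract alludes to.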


One Model to Rule them all: Multitask and Multilingual Modelling for Lexical Analysis

Deep Neural Networks are at the forefront of many state-of-the-art approaches to Natural Language Processing (NLP). The field of NLP is currently awash with papers building on this method, to the extent that it has quite aptly been described as a tsunami (Manning, 2015). While a large part of the field is familiar with this family of learning architectures, it is the intention of this thesis t...


Improving Object Classification Using Pose Information

We propose a method that exploits pose information in order to improve object classification. A lot of research has focused on other strategies, such as engineering feature extractors, trying different classifiers, and even using transfer learning. Here, we use neural network architectures in a multi-task setup, whose outputs predict both the class and the camera azimuth. We investigate both Mul...
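The multi-task setup this abstract describes can be sketched as a shared trunk feeding two heads, one for class logits and one for camera azimuth (a hypothetical forward pass; all names and layer sizes are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W_trunk, W_cls, W_azi):
    h = np.tanh(W_trunk @ x)        # shared representation
    return W_cls @ h, W_azi @ h     # (class logits, azimuth estimate)

x = rng.standard_normal(8)              # input features for one image
W_trunk = rng.standard_normal((16, 8))  # shared trunk
W_cls = rng.standard_normal((5, 16))    # head for 5 object classes (assumed)
W_azi = rng.standard_normal((1, 16))    # head for a scalar azimuth

logits, azimuth = forward(x, W_trunk, W_cls, W_azi)
```

Both heads are trained on the same trunk, so the azimuth target acts as an auxiliary signal shaping the shared representation used for classification.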



Journal:

Volume   Issue

Pages  -

Publication date: 2017